    The mechanism underlying backward priming in a lexical decision task: Spreading activation versus semantic matching

    Koriat (1981) demonstrated that an association from the target to a preceding prime, in the absence of an association from the prime to the target, facilitates lexical decision, and referred to this effect as "backward priming". Backward priming is relevant because it can provide information about the mechanism underlying semantic priming effects. Following Neely (1991), we distinguish three mechanisms of priming: spreading activation, expectancy, and semantic matching/integration. The goal was to determine which of these mechanisms causes backward priming, by assessing effects of backward priming on a language-relevant ERP component, the N400, and on reaction time (RT). Based on previous work, we propose that the N400 priming effect reflects expectancy and semantic matching/integration but, in contrast with RT, does not reflect spreading activation. Experiment 1 shows a backward priming effect that is qualitatively similar for the N400 and RT in a lexical decision task. This effect was not modulated by an ISI manipulation. Experiment 2 clarifies that the N400 backward priming effect reflects genuine changes in N400 amplitude and cannot be ascribed to other factors. We will argue that these backward priming effects cannot be due to expectancy but are best accounted for in terms of semantic matching/integration.

    How the Emotional Content of Discourse Affects Language Comprehension

    Emotion effects on cognition have often been reported. However, only a few studies have investigated emotional effects on subsequent language processing, and in most cases these effects were induced by non-linguistic stimuli such as films, faces, or pictures. Here, we investigated by means of event-related brain potentials (ERPs) how a paragraph of positive, negative, or neutral emotional valence affects the processing of a subsequent emotionally neutral sentence containing either a semantic violation, a syntactic violation, or no violation. Behavioral data revealed strong effects of emotion; error rates and reaction times increased significantly in sentences preceded by a positive paragraph relative to negative and neutral ones. In ERPs, the N400 to semantic violations was not affected by emotion. In the syntactic experiment, however, clear emotion effects were observed on ERPs. The left anterior negativity (LAN) to syntactic violations, which was not visible in the neutral condition, was present in the negative and positive conditions. This is interpreted as reflecting modulatory effects of prior emotions on syntactic processing, which is discussed in the light of three alternative or complementary explanations based on emotion-induced cognitive styles, working memory, and arousal models. The present effects of emotion on the LAN are especially remarkable considering that syntactic processing has often been regarded as encapsulated and autonomous.

    Right hemisphere has the last laugh: neural dynamics of joke appreciation

    Understanding a joke relies on semantic, mnemonic, inferential, and emotional contributions from multiple brain areas. Anatomically constrained magnetoencephalography (aMEG), combining high-density whole-head MEG with anatomical magnetic resonance imaging, allowed us to estimate where the humor-specific brain activations occur and to understand their temporal sequence. Punch lines provided either funny, not funny (semantically congruent), or nonsensical (incongruent) replies to joke questions. Healthy subjects rated them as being funny or not funny. As expected, incongruous endings evoke the largest N400m in left-dominant temporo-prefrontal areas, due to integration difficulty. In contrast, funny punch lines evoke the smallest N400m during this initial lexical–semantic stage, consistent with their primed “surface congruity” with the setup question. In line with its sensitivity to ambiguity, the anteromedial prefrontal cortex may contribute to the subsequent “second take” processing, which, for jokes, presumably reflects detection of a clever “twist” contained in the funny punch lines. Joke-selective activity simultaneously emerges in the right prefrontal cortex, which may lead an extended bilateral temporo-frontal network in establishing the distant, unexpected, creative coherence between the punch line and the setup. This progression, from an initially promising but misleading integration in left frontotemporal associations, to medial prefrontal ambiguity evaluation and right prefrontal reprocessing, may reflect the essential tension and resolution underlying humor.

    Neuropragmatics and conversation: Experimental findings on action ascription

    In order to produce relevant responses in conversation, participants monitor turns at talk for the actions they perform, actions such as requests, offers, complaints, etc. (Schegloff, 2007). However, the link between turn construction and action is not straightforward. Turns at talk do not contain discrete ‘illocutionary force indicators’, and what subtle action cues are available, such as interrogative syntax, can be overridden by top-down factors like epistemic status (Heritage, 2012). Given that utterances are often underspecified for action, how is it that participants recognize actions so efficiently, as evidenced by the extraordinarily fast transitions between turns (Sacks, Schegloff, & Jefferson, 1974; Stivers et al., 2009; Levinson, 2013)? As the first step in investigating the cognitive underpinnings of action recognition in conversation, we conducted Event-Related Potential (ERP) experiments using short scripted dialogues in Dutch. The excellent time resolution of ERPs allows us to track listeners’ brain responses as utterances unfold. The critical utterances were assertions (e.g., “I have a credit card”) produced in three sequential environments, affording different ascriptions of the action; as an answer to a question, as an indirect rejection, or as a pre-offer. In each case the assertion is used as a vehicle for some other action, and it is “part of competent membership in the society/culture and being a competent interactant to analyze assertions of this sort for what (else) they may be doing at this moment, at this juncture of the interaction, in this specific sequential context” (Schegloff, 2007, p. 35). We tapped into this competence by exploring the time-course of action recognition, using the following rationale: If comprehension at the action level takes place early in the incoming utterance, enabling quick turn transitions, we should find ERP differences between the actions at the first word or the verb. 
On the other hand, if action comprehension requires analysis of the complete utterance, ERP effects are expected to occur predominantly at the final word. The results indicate that recipients tune in to the action of an utterance as early as 400 ms after first word onset. However, the time-course of speech act comprehension depends on the specific action. Rejections elicit an ERP effect at the first word and the verb, but not at the final word. We take this to show that when the utterance is a second pair part in an adjacency pair sequence – as was the case in the rejections – recipients seem to recognize the action before the final word, even though the final word is a critical part of the propositional content. The pre-offers, on the other hand, do elicit an ERP component at the end of the utterance, suggesting that analysis of the entire turn is needed to understand the action. These findings indicate that utterance interpretation is sensitive to specific actions and how they are organized in sequences. By bridging conversation analysis and neuropragmatics we have come one step closer to understanding language comprehension in its natural habitat, where action is omnirelevant (Schegloff, 1995).

    Prelims

    The classic account of language is that language processing occurs in isolation from other cognitive systems, such as perception, motor action, and emotion. The central theme of this paper is the relationship between a participant’s emotional state and language comprehension. Does emotional context affect how we process neutral words? Recent studies showed that processing of word meaning – traditionally conceived as an automatic process – is affected by emotional state. The influence of emotional state on syntactic processing is less clear. One study reported a mood-related P600 modulation, while another study did not observe an effect of mood on syntactic processing. The goals of this study were: first, to clarify whether, and if so how, mood affects syntactic processing; second, to shed light on the underlying mechanisms by separating possible effects of mood from those of attention on syntactic processing. Event-related potentials (ERPs) were recorded while participants read syntactically correct or incorrect sentences. Mood (happy vs. sad) was manipulated by presenting film clips. Attention was manipulated by directing attention to syntactic vs. physical features. The mood induction was effective. Interactions between mood, attention, and syntactic correctness were obtained, showing that mood and attention modulated the P600. The mood manipulation led to a reduction in P600 for sad as compared to happy mood when attention was directed at syntactic features. The attention manipulation led to a reduction in P600 when attention was directed at physical features compared to syntactic features for happy mood. From this we draw two conclusions: first, emotional state does affect syntactic processing; we propose mood-related differences in the reliance on heuristics as the underlying mechanism. Second, attention can contribute to emotion-related ERP effects in syntactic language processing. 
Therefore, future studies on the relation between language and emotion will have to control for effects of attention.